Using Opposition-based Learning with Particle Swarm Optimization and Barebones Differential Evolution

Author

  • Mahamed G.H. Omran
Abstract

Particle swarm optimization (PSO) (Kennedy & Eberhart, 1995) and differential evolution (DE) (Storn & Price, 1995) are two stochastic, population-based optimization methods which have been applied successfully to a wide range of problems, as summarized in Engelbrecht (2005) and Price et al. (2005). A number of variations of both PSO and DE have been developed over the past decade to improve the performance of these algorithms (Engelbrecht, 2005; Price et al., 2005). One class of variations comprises hybrids of PSO and DE, in which the advantages of the two approaches are combined. The barebones DE (BBDE) is a PSO-DE hybrid proposed by Omran et al. (2007) which combines concepts from the barebones PSO (Kennedy, 2003) with the recombination operator of DE. The resulting algorithm eliminates the control parameters of PSO and replaces the static DE control parameters with dynamically changing parameters, producing an almost parameter-free, self-adaptive optimization algorithm. Recently, opposition-based learning (OBL) was proposed by Tizhoosh (2005) and has been applied successfully to several problems (Rahnamayan et al., 2008). The basic concept of OBL is to consider an estimate and its corresponding opposite estimate simultaneously in order to obtain a better approximation of the current candidate solution. Opposite numbers were used by Rahnamayan et al. (2008) to enhance the performance of differential evolution. In addition, Han and He (2007) and Wang et al. (2007) used OBL to improve the performance of PSO. However, in both cases several parameters that are difficult to tune were added to PSO. Wang et al. (2007) applied OBL during swarm initialization and in each iteration with a user-specified probability; in addition, Cauchy mutation is applied to the best particle to avoid it being trapped in local optima. Similarly, Han and He (2007) used OBL in the initialization phase and during each iteration, with a constriction factor used to enhance convergence speed. In this chapter, OBL is used to improve the performance of PSO and BBDE without adding any extra parameters. The performance of the proposed methods is investigated on several benchmark functions, and the experiments conducted show that OBL improves the performance of both PSO and BBDE. The remainder of the chapter is organized as follows: PSO is summarized in Section 2. DE is presented in Section 3. Section 4 provides an overview of BBDE. OBL is briefly reviewed in ...
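As a concrete illustration of the opposition idea described in the abstract, the sketch below computes the opposite of a candidate solution within box constraints and uses it during population initialization, keeping the fitter of each point/opposite pair. This is a minimal sketch assuming box bounds, a minimization problem, and a sphere test function; it is not the chapter's exact PSO/BBDE integration, and names such as obl_initialize are illustrative.

    import numpy as np

    # Minimal sketch of opposition-based learning (OBL): the opposite of a
    # component x_j lying in [lower_j, upper_j] is lower_j + upper_j - x_j.
    # How the chapter wires this into PSO and BBDE (initialization only,
    # every iteration, ...) is not reproduced here.

    def opposite(x, lower, upper):
        """Component-wise opposite point of x within the box [lower, upper]."""
        return lower + upper - x

    def obl_initialize(pop_size, lower, upper, fitness, rng=None):
        """Random initialization that keeps the fitter of each point/opposite pair."""
        rng = np.random.default_rng() if rng is None else rng
        pop = lower + rng.random((pop_size, lower.size)) * (upper - lower)
        opp = opposite(pop, lower, upper)
        f_pop = np.array([fitness(p) for p in pop])
        f_opp = np.array([fitness(p) for p in opp])
        better = f_opp < f_pop               # opposite point is fitter (minimization)
        pop[better] = opp[better]
        return pop

    # Usage on the sphere function (an assumed benchmark, not taken from the chapter)
    lo, hi = np.full(10, -5.12), np.full(10, 5.12)
    swarm = obl_initialize(30, lo, hi, lambda x: float(np.sum(x ** 2)))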


Related articles

Opposition-Based Barebones Particle Swarm for Constrained Nonlinear Optimization Problems

This paper presents a modified barebones particle swarm optimization (OBPSO) to solve constrained nonlinear optimization problems. The proposed approach, OBPSO, combines barebones particle swarm optimization (BPSO) and opposition-based learning (OBL) to improve the quality of solutions. A novel boundary search strategy is used to approach the boundary between the feasible and infeasible search regions. ...
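For reference, the barebones PSO update that OBPSO builds on samples each new position from a Gaussian centred midway between a particle's personal best and the global best, with the absolute difference as standard deviation (Kennedy, 2003). The sketch below shows only that sampling step; the OBL and boundary-search components of OBPSO are not reproduced, and the name bbpso_step is illustrative.

    import numpy as np

    def bbpso_step(pbest, gbest, rng=None):
        """One barebones PSO sampling step: N(midpoint, |pbest - gbest|) per dimension."""
        rng = np.random.default_rng() if rng is None else rng
        mean = 0.5 * (pbest + gbest)     # midpoint of personal and global best
        std = np.abs(pbest - gbest)      # exploration shrinks as the swarm agrees
        return rng.normal(mean, std)

    # pbest: (n_particles, dim) personal bests; gbest: (dim,) current global best
    new_positions = bbpso_step(np.random.default_rng(0).random((20, 5)), np.zeros(5))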


Enhancing particle swarm optimization using generalized opposition-based learning

Particle swarm optimization (PSO) has been shown to yield good performance for solving various optimization problems. However, it tends to suffer from premature convergence when solving complex problems. This paper presents an enhanced PSO algorithm called GOPSO, which employs generalized opposition-based learning (GOBL) and Cauchy mutation to overcome this problem. GOBL can provide a faster co...
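The generalized opposition-based learning (GOBL) transformation mentioned above maps a point x to k(a + b) - x, where [a, b] is the current search interval of each dimension and k is a random number in [0, 1]. The sketch below shows that transformation only; the per-generation application probability and the Cauchy mutation of the best particle in GOPSO are omitted, and the clipping of out-of-range points here is an assumption, not necessarily the paper's rule.

    import numpy as np

    def gobl(x, lower, upper, rng=None):
        """Generalized opposite of x in [lower, upper]: k*(lower+upper) - x, k ~ U(0,1)."""
        rng = np.random.default_rng() if rng is None else rng
        k = rng.random()                      # one random scale per transformation
        x_star = k * (lower + upper) - x
        return np.clip(x_star, lower, upper)  # assumption: clip points that leave the box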


Particle Swarm Optimization based on Multiple Swarms and Opposition-based Learning

Standard particle swarm optimization easily falls into local optima and suffers from low precision. To address these problems, the paper proposes an effective approach, called particle swarm optimization based on multiple swarms and opposition-based learning, which divides the swarm into two sub-swarms. The first sub-swarm employs the PSO evolution model in order to retain the self-learning ability; ...


Using CODEQ to Train Feed-forward Neural Networks

CODEQ is a new, population-based meta-heuristic algorithm that is a hybrid of concepts from chaotic search, opposition-based learning, differential evolution and quantum mechanics. CODEQ has successfully been used to solve different types of problems (e.g. constrained, integer-programming, engineering) with excellent results. In this paper, CODEQ is used to train feed-forward neural networks. T...


Opposition-Based Adaptive Fireworks Algorithm

The fireworks algorithm (FWA) is a recent swarm intelligence algorithm inspired by observing fireworks explosions. The adaptive fireworks algorithm (AFWA) introduces adaptive explosion amplitudes to improve the performance of the enhanced fireworks algorithm (EFWA). The purpose of this paper is to add opposition-based learning (OBL) to AFWA with the goal of further boosting performance and ...




Publication date: 2012